Maximum margin equalizers trained with the Adatron algorithm

Authors

  • Ignacio Santamaría
  • Rafael González Ayestarán
  • Carlos Pantaleón
  • José Carlos Príncipe
Abstract

In this paper we apply the structural risk minimization principle as an appropriate criterion to train decision feedback and transversal equalizers. We consider both linear discriminant (optimal hyperplane) and nonlinear discriminant (support vector machine) classifiers as an alternative to the linear minimum mean-square error (MMSE) equalizer and radial basis function (RBF) networks, respectively. A fast and simple adaptive algorithm called the Adatron is applied to obtain the linear or nonlinear classifier. In this way we avoid the high computational cost of quadratic programming. Moreover, the use of soft margin (regularized) classifiers is proposed as a simple way to consider “noisy” channel states: this alternative improves the bit error rate, mainly at low SNRs. Furthermore, an adaptive implementation is discussed. Some simulation examples show the advantages of the proposed linear and nonlinear equalizers: a better performance in comparison to the linear MMSE and a simpler structure in comparison to the RBF (Bayesian). © 2002 Elsevier Science B.V. All rights reserved.
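The Adatron training rule the abstract refers to can be sketched as follows. This is a minimal illustrative version, not the paper's implementation: each multiplier alpha[i] is nudged toward a unit margin and clipped at zero, which recovers the maximum-margin hyperplane without quadratic programming. The toy data and parameter choices are my own.

```python
import numpy as np

def adatron(X, y, eta=0.1, epochs=200):
    """Adatron training of a maximum-margin linear classifier.

    X: (n, d) inputs; y: labels in {-1, +1}.
    Each multiplier alpha[i] is pushed toward a unit margin and
    clipped at zero, avoiding quadratic programming.
    """
    n = X.shape[0]
    K = X @ X.T                      # linear-kernel Gram matrix
    alpha = np.zeros(n)
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * np.sum(alpha * y * K[i])
            alpha[i] = max(0.0, alpha[i] + eta * (1.0 - margin))
    w = (alpha * y) @ X              # recover the primal weight vector
    return w, alpha
```

For stability the step size must satisfy eta * max(K[i, i]) < 2; on separable data the multipliers that stay positive mark the support vectors.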


Similar articles

Channel equalization and the Bayes point machine

Equalizers trained with a large margin have an ability to better handle noise in unseen data and drift in the target solution. We present a method of approximating the Bayes optimal strategy which provides a large margin equalizer, the Bayes point equalizer. The method we use to estimate the Bayes point is to average equalizers that are run on independently chosen subsets of the data. To better...
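The averaging strategy described above can be sketched as follows; this is a toy illustration in which a plain perceptron stands in for the equalizer, and the bootstrap-subset scheme, names, and data are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def perceptron(X, y, epochs=50):
    """Plain perceptron, standing in for a trained equalizer."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:   # misclassified: take a correction step
                w += yi * xi
    return w

def bayes_point_average(X, y, n_models=10, seed=0):
    """Approximate the Bayes point by averaging classifiers trained on
    independently drawn (here: bootstrap) subsets of the data."""
    rng = np.random.default_rng(seed)
    ws = []
    for _ in range(n_models):
        idx = rng.choice(len(X), size=len(X), replace=True)
        ws.append(perceptron(X[idx], y[idx]))
    return np.mean(ws, axis=0)
```

Each subset yields one consistent separator; their average approximates the center of mass of version space, which is the Bayes-point idea.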


The Kernel-Adatron Algorithm: A Fast and Simple Learning Procedure for Support Vector Machines

Support Vector Machines work by mapping training data for classification tasks into a high dimensional feature space. In the feature space they then find a maximal margin hyperplane which separates the data. This hyperplane is usually found using a quadratic programming routine which is computationally intensive, and is non-trivial to implement. In this paper we propose an adaptation of the Adatr...
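The adaptation this abstract describes, running Adatron updates in a kernel-induced feature space, can be sketched like this. A hypothetical minimal version with a Gaussian kernel; the kernel width, step size, and XOR test data below are illustrative choices, not from the paper.

```python
import numpy as np

def kernel_adatron(X, y, gamma=1.0, eta=0.05, epochs=500):
    """Adatron updates on a Gaussian Gram matrix (Kernel-Adatron).

    Returns a predictor; the maximal-margin hyperplane lives in the
    kernel-induced feature space, so no explicit mapping is needed.
    """
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    K = np.exp(-gamma * sq)          # Gaussian Gram matrix
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i in range(len(X)):
            margin = y[i] * np.sum(alpha * y * K[i])
            alpha[i] = max(0.0, alpha[i] + eta * (1.0 - margin))
    def predict(Xt):
        sqt = np.sum((Xt[:, None, :] - X[None, :, :]) ** 2, axis=2)
        return np.sign(np.exp(-gamma * sqt) @ (alpha * y))
    return predict
```

On XOR-labelled points, which no linear classifier separates, the kernelized updates still find a separating solution.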


On-Line AdaTron Learning of Unlearnable Rules

We study the on-line AdaTron learning of linearly non-separable rules by a simple perceptron. Training examples are provided by a perceptron with a non-monotonic transfer function which reduces to the usual monotonic relation in a certain limit. We find that, although the on-line AdaTron learning is a powerful algorithm for the learnable rule, it does not give the best possible generalization e...


Adaptive blind equalizers with automatically controlled parameters

Signals passing through a channel undergo various forms of distortion, the most common of which is inter-symbol interference (ISI). ISI-induced errors can cause the receiver to misinterpret the received samples. Equalizers are important parts of receivers that minimize the linear distortion produced by the channel. If channel characteristics are known a priori, t...
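As a concrete illustration of the training-based case this excerpt alludes to (a known training sequence available at the receiver), here is a minimal LMS-adapted transversal equalizer. The channel taps, step size, and BPSK symbol model are my own toy assumptions, not the paper's setup.

```python
import numpy as np

def lms_equalizer(x, d, n_taps=8, mu=0.01):
    """Adapt an FIR transversal equalizer with LMS.

    x: received (ISI-distorted) samples; d: known training symbols.
    """
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # tap-delay line, newest sample first
        e = d[n] - w @ u                    # error against the desired symbol
        w += mu * e * u                     # stochastic-gradient step
    return w

# Toy channel h = [1.0, 0.5]: each sample leaks into the next one (ISI).
rng = np.random.default_rng(1)
s = rng.choice([-1.0, 1.0], size=2000)      # BPSK training symbols
x = np.convolve(s, [1.0, 0.5])[:len(s)]     # x[n] = s[n] + 0.5*s[n-1]
x += 0.05 * rng.standard_normal(len(x))     # mild additive noise
w = lms_equalizer(x, s)
```

Since the toy channel is minimum phase, the learned taps approximate the truncated inverse 1/(1 + 0.5 z^-1) = sum_k (-0.5)^k z^-k, and sign decisions on the equalizer output recover the symbols.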


Adaptive Blind Equalization Using Bottleneck Networks Implemented by Evolvable Hardware

We propose the use of bottleneck networks implemented by evolvable hardware for the adaptive blind equalization of digital communication channels. Blind channel equalization is a method of recovering the original symbols by compensating for channel distortion at the receiver's end without any known training sequence for the startup period. If the non-linear channel distortion is too severe to i...



Journal:
  • Signal Processing

Volume 83, Issue 

Pages  -

Publication year: 2003